Facebook and Instagram bring back facial recognition to 'protect people'
Facebook and Instagram have a problem. Well, they have many, many problems, but one of the ones they feel like addressing is "celeb-bait ads and impersonation." According to a new post from parent company Meta, the way they're going to try solving this is through the use of facial recognition technology. In the lengthy post, Meta explains that the biggest impact of these new tools will be an expanded effort to stop scam accounts from impersonating celebrities. If you've used Facebook in the last year or so, you've probably encountered friend suggestions for attractive celebrities, accounts that are obvious fakes identifiable by their paparazzi photos and deliberately misspelled names.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.69)
California examines benefits, risks of using artificial intelligence in state government
Artificial intelligence that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor's office on Tuesday. Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias. "When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs," the report stated. The 34-page report, ordered by Gov. Gavin Newsom, provides a glimpse into how California could apply the technology to state programs even as lawmakers grapple with how to protect people without hindering innovation.
- North America > United States > California > Los Angeles County > Los Angeles (0.16)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia (0.05)
AI 'voice clone' scams increasingly hitting elderly Americans, senators warn
Generative artificial intelligence systems are already making it easier for scammers to con elderly Americans out of their money, and several senators are asking the Biden administration to step in and protect people from this quickly emerging threat. Sen. Mike Braun, R-Ind., the top Republican on the Senate Special Committee on Aging, spearheaded a bipartisan letter to the Federal Trade Commission (FTC) on Thursday that asks for an update on what the agency knows about AI-driven scams against the elderly and what it is doing to protect people. The letter, signed by every member of the Senate committee from both parties, asks about AI-powered technology that can be used to replicate people's voices. The letter to FTC Chairwoman Lina Khan warned that voice clones and chatbots allow scammers to make the elderly believe they are talking to a relative or close friend, leaving them vulnerable to theft.
- North America > United States > Indiana (0.06)
- North America > United States > Arizona (0.05)
Computer scientist aims to protect people in age of artificial intelligence
As data-driven technologies transform the world and artificial intelligence raises questions about bias, privacy and transparency, Suresh Venkatasubramanian is offering his expertise to help create guardrails to ensure that technologies are developed and deployed responsibly. "We need to protect the American people and make sure that technology is used in ways that reinforce our highest values," said Venkatasubramanian, a professor of computer science and data science at Brown University. On the heels of a recently concluded 15-month appointment as an advisor to the White House Office of Science and Technology Policy, Venkatasubramanian returned to Washington, D.C., on Tuesday, Oct. 4, for the unveiling of "A Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," during a ceremony at the White House. Venkatasubramanian said the blueprint represents the culmination of 14 months of research and collaboration led by the Office of Science and Technology Policy with partners across the federal government, academia, civil society, the private sector and communities around the country. That collaboration informed the development of the first-ever national guidance focused on the use and deployment of automated technologies that have the potential to impact people's rights, opportunities and access to services.
Biden's AI Bill of Rights Is Toothless Against Big Tech
Last year, the White House Office of Science and Technology Policy announced that the US needed a bill of rights for the age of algorithms. Harms from artificial intelligence disproportionately impact marginalized communities, the office's director and deputy director wrote in a WIRED op-ed, and so government guidance was needed to protect people against discriminatory or ineffective AI. Today, the OSTP released the Blueprint for an AI Bill of Rights, after gathering input from companies like Microsoft and Palantir as well as AI auditing startups, human rights groups, and the general public. Its five principles state that people have a right to control how their data is used, to opt out of automated decision-making, to live free from ineffective or unsafe algorithms, to know when AI is making a decision about them, and to not be discriminated against by unfair algorithms. "Technologies will come and go, but foundational liberties, rights, opportunities, and access need to be held open, and it's the government's job to help ensure that's the case," Alondra Nelson, OSTP deputy director for science and society, told WIRED.
UK fines Clearview AI £7.5M for scraping citizens' data
Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world. In November 2021, the UK's Information Commissioner's Office (ICO) announced a provisional fine of just over £17 million against Clearview AI. Today's announcement suggests Clearview AI got off relatively lightly.
- Oceania > Australia (0.19)
- Europe > United Kingdom (0.08)
- North America > United States > California (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
- Information Technology > Security & Privacy (1.00)
- Law (0.99)
UK fines Clearview just under $10M for privacy breaches – TechCrunch
The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet; and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data off the public internet, such as from social media services, to create an online database that it uses to power an AI-based identity-matching service which it sells to entities such as law enforcement. The problem is Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.
- Europe > United Kingdom (1.00)
- North America > United States > Illinois (0.07)
- Oceania > Australia (0.05)
- (3 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.58)
- Information Technology > Communications > Social Media (0.38)
Key Markets for Edge AI in 2022
AI solutions have already been implemented across industries, helping to increase efficiency, reduce costs, improve safety, and more. New advancements in edge AI processing are now enabling companies to take AI applications to the next level with multiple large, complex deep neural networks (DNNs). Analog compute-in-memory is a revolutionary new approach that is bringing incredible performance, power, and cost advantages to the edge AI industry. Here are predictions for three key markets that will be reshaped by this new approach to edge AI processing. AI is starting to completely transform the market for video security, which includes measures to deny unauthorized access and protect personnel/property.
EU Parliament, countries want more innovation, less burden in AI Act
An internal report on Artificial Intelligence recently approved by a special committee of the European Parliament embodies a push from EU lawmakers and member states to make regulation on artificial intelligence less burdensome and more innovation-friendly. Christian Democrat MEP Axel Voss has been leading the charge against "overburdening" companies with excessive regulation, arguing that the EU regulatory environment should leave more room for innovation. That was the underlying motive of an own-initiative report on Artificial Intelligence in a Digital Age, recently approved in the AIDA committee, a parliamentary body set up in 2020, under Voss' leadership. "We need a better regulatory framework that learns also from the mistakes of the GDPR," Voss said while presenting the report. Instead of overburdening companies, the AI Act should give clear guidance and should leave space for innovation, he added.
- Law (1.00)
- Government > Regional Government > Europe Government (1.00)
- Information Technology > Security & Privacy (0.75)
EU: Artificial Intelligence Regulation Threatens Social Safety Net, Warns HRW
The European Union's plan to regulate artificial intelligence is ill-equipped to protect people from flawed algorithms that deprive them of lifesaving benefits and discriminate against vulnerable populations, Human Rights Watch said in a report on the regulation. The European Parliament should amend the regulation to better protect people's rights to social security and an adequate standard of living. The 28-page report in the form of a question-and-answer document, "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net," examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.
- Europe > Austria (0.27)
- Europe > Netherlands (0.26)
- Europe > United Kingdom (0.25)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)